Colored-speech synaesthesia is triggered by multisensory, not unisensory, perception.
Authors
Abstract
Although it is estimated that as many as 4% of people experience some form of enhanced cross talk between (or within) the senses, known as synaesthesia, very little is understood about the level of information processing required to induce a synaesthetic experience. In work presented here, we used a well-known multisensory illusion called the McGurk effect to show that synaesthesia is driven by late, perceptual processing, rather than early, unisensory processing. Specifically, we tested 9 linguistic-color synaesthetes and found that the colors induced by spoken words are related to what is perceived (i.e., the illusory combination of audio and visual inputs) and not to the auditory component alone. Our findings indicate that color-speech synaesthesia is triggered only when a significant amount of information processing has occurred and that early sensory activation is not directly linked to the synaesthetic experience.
Similar Articles
The Effect of Combined Sensory and Semantic Components on Audio–Visual Speech Perception in Older Adults
Previous studies have found that perception in older people benefits from multisensory over unisensory information. As normal speech recognition is affected by both the auditory input and the visual lip movements of the speaker, we investigated the efficiency of audio and visual integration in an older population by manipulating the relative reliability of the auditory and visual information in...
Does audiovisual speech offer a fountain of youth for old ears? An event-related brain potential study of age differences in audiovisual speech perception.
The current study addressed the question whether audiovisual (AV) speech can improve speech perception in older and younger adults in a noisy environment. Event-related potentials (ERPs) were recorded to investigate age-related differences in the processes underlying AV speech perception. Participants performed an object categorization task in three conditions, namely auditory-only (A), visual-...
A dynamical framework to relate perceptual variability with multisensory information processing
Multisensory processing involves participation of individual sensory streams, e.g., vision, audition to facilitate perception of environmental stimuli. An experimental realization of the underlying complexity is captured by the "McGurk-effect"- incongruent auditory and visual vocalization stimuli eliciting perception of illusory speech sounds. Further studies have established that time-delay be...
The Default Mode of Primate Vocal Communication and Its Neural Correlates
It has been argued that the integration of the visual and auditory channels during human speech perception is the default mode of speech processing (Rosenblum, 2005). That is, speech perception is not a capacity that is 'piggybacked' onto auditory-only speech perception. Visual information from the mouth and other parts of the face is used by all perceivers and readily integrates with auditory s...
Multisensory and sensorimotor interactions in speech perception
This research topic presents speech as a natural, well-learned, multisensory communication signal, processed by multiple mechanisms. Reflecting the general status of the field, most articles focus on audiovisual speech perception and many utilize the McGurk effect, which arises when discrepant visual and auditory speech stimuli are presented (McGurk and MacDonald, 1976). Tiippana (2014) argues ...
Journal: Psychological Science
Volume: 20, Issue: 5
Pages: -
Publication year: 2009